Toward a Human-Centered UML for Risk Analysis
Safety is now a major concern in many complex systems such as medical robots.
One way to control the complexity of such systems is to manage risk, and the
first and essential step of this activity is risk analysis. During risk
analysis, two main human-factors studies must be integrated: task analysis and
human error analysis. This multidisciplinary analysis is often shared among
several stakeholders, each using their own languages and techniques, which
frequently produces consistency errors and mutual misunderstanding. Hence,
this paper proposes to conduct risk analysis in the common modeling language
UML (Unified Modeling Language) and to express human-factors concepts for task
analysis and human error analysis using the features of this language. The
approach is applied to the development of a medical robot for tele-echography.
SENA: Similarity-based Error-checking of Neural Activations
In this work, we propose SENA, a run-time monitor focused on detecting unreliable predictions from machine learning (ML) classifiers. The main idea is that instead of trying to detect when an image is out-of-distribution (OOD), which does not always result in a wrong output, we focus on detecting whether the prediction from the ML model is unreliable, which most of the time does result in a wrong output, independently of whether the input is in-distribution (ID) or OOD. The verification is done by checking the similarity between the neural activations of an incoming input and a set of representative neural activations recorded during training. SENA uses information from true-positive and false-negative examples collected during training to verify whether a prediction is reliable. Our approach achieves results comparable to state-of-the-art solutions without requiring any prior OOD information and without hyperparameter tuning. The code is publicly available for easy reproducibility at https://github.com/raulsenaferreira/SENA
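The core mechanism described above can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not SENA's exact procedure: the function names (`record_activations`, `is_reliable`), the use of cosine similarity, and the fixed threshold are all hypothetical choices; the actual monitor is described in the paper and the linked repository.

```python
import numpy as np

def record_activations(activations, labels, num_classes):
    """Group training-time activation vectors by their class label."""
    return {c: activations[labels == c] for c in range(num_classes)}

def is_reliable(activation, reference, predicted_class, threshold):
    """Accept a prediction if the incoming activation vector is close
    (here: cosine similarity) to at least one recorded activation of
    the predicted class; otherwise flag it as unreliable."""
    refs = reference[predicted_class]
    if len(refs) == 0:
        return False
    # Normalize and compute cosine similarity against every reference.
    a = activation / (np.linalg.norm(activation) + 1e-12)
    r = refs / (np.linalg.norm(refs, axis=1, keepdims=True) + 1e-12)
    sims = r @ a
    return float(sims.max()) >= threshold

# Toy usage with 2-dimensional "activations" and two classes.
acts = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
labels = np.array([0, 0, 1])
ref = record_activations(acts, labels, 2)
ok = is_reliable(np.array([0.95, 0.05]), ref, 0, 0.9)   # near class-0 refs
bad = is_reliable(np.array([0.0, 1.0]), ref, 0, 0.99)   # far from class-0 refs
```

In this toy run, `ok` is `True` and `bad` is `False`: an activation resembling the recorded class-0 patterns is accepted, while one that does not is flagged. The paper's method additionally exploits false-negative examples, which this sketch omits.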
ARCH-COMP22 Category Report: Stochastic Models
This report presents the results of a friendly competition for formal verification and policy synthesis of stochastic models. It also introduces new benchmarks and their properties within this category and recommends next steps towards next year's edition of the competition. Compared with tools for non-probabilistic models, tools for stochastic models are at an early stage of development that does not yet allow a full competition on a standard set of benchmarks. We report on an initiative to collect a set of minimal benchmarks that all such tools can run, thus facilitating comparison of the efficiency of the implemented techniques. The friendly competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in Summer 2022.